3D scene flow estimation on point clouds is a low-level 3D motion perception task in computer vision. Flow embedding is a commonly used technique in scene flow estimation, which encodes the point motion between two consecutive frames. Thus, it is critical for the flow embedding to capture the correct overall direction of the motion. However, previous works only search locally to determine soft correspondences, ignoring the distant points that turn out to be the actual matches. In addition, the estimated correspondence is usually computed from the forward direction of the adjacent point clouds, and may not be consistent with the correspondence obtained from the backward direction. To tackle these problems, we propose a novel all-to-all flow embedding layer with backward reliability validation during the initial scene flow estimation. Besides, we investigate and compare several design choices in key components of the 3D scene flow network, including the point similarity computation, the input elements of the predictor, and the design of the predictor and refinement levels. After carefully choosing the most effective designs, we are able to present a model that achieves state-of-the-art performance on the FlyingThings3D and KITTI Scene Flow datasets. Our proposed model surpasses all existing methods by at least 38.2% on the FlyingThings3D dataset and 24.7% on the KITTI Scene Flow dataset on the EPE3D metric. We release our code at https://github.com/irmvlab/3dflow.
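The backward reliability validation above can be illustrated with a toy all-to-all soft matcher. This is a hedged NumPy sketch, not the paper's learned flow embedding: `temperature` and `tol` are made-up parameters, and plain Euclidean distance stands in for learned point similarity.

```python
import numpy as np

def soft_correspondence(src, dst, temperature=0.1):
    """All-to-all soft matching: each source point attends to EVERY
    destination point, weighted by (here, positional) similarity."""
    # Pairwise squared distances, shape (n_src, n_dst)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    # Softmax over all destination points, not just a local neighborhood
    w = np.exp(-d2 / temperature)
    w /= w.sum(axis=1, keepdims=True)
    return w @ dst  # soft-matched destination position per source point

def backward_validated_flow(p1, p2, temperature=0.1, tol=0.5):
    """Estimate flow p1 -> p2, and mark points whose forward and
    backward matches disagree (cycle inconsistency) as unreliable."""
    fwd = soft_correspondence(p1, p2, temperature)   # where p1 maps in p2
    bwd = soft_correspondence(fwd, p1, temperature)  # map the match back to p1
    reliable = np.linalg.norm(bwd - p1, axis=1) < tol
    return fwd - p1, reliable
```

A rigid translation of the whole cloud is recovered exactly and every point passes the cycle check; occluded or mismatched points would fail it.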
Surround-view fisheye perception in valet parking scenes is fundamental and crucial in autonomous driving. Environmental conditions in parking lots differ from those in common public datasets, e.g., poor lighting and opacity, which substantially impacts perception performance. Most existing networks based on public datasets may generalize suboptimally to these valet parking scenes, and are further affected by fisheye distortion. In this article, we introduce a new large-scale fisheye dataset called the Fisheye Parking Dataset (FPD) to promote research on diverse real-world surround-view parking cases. Notably, our compiled FPD exhibits excellent characteristics for different surround-view perception tasks. In addition, we propose a real-time distortion-insensitive multi-task framework, the Fisheye Perception Network (FPNet), which improves surround-view fisheye BEV perception through an enhanced fisheye distortion operation and lightweight multi-task designs. Extensive experiments validate the effectiveness of our approach and the dataset's exceptional generalizability.
Monocular 3D object detection is a low-cost but challenging task, as it requires generating accurate 3D localization solely from a single image input. Recently developed depth-assisted methods show promising results by using explicit depth maps as intermediate features, which are either precomputed by monocular depth estimation networks or jointly evaluated with 3D object detection. However, inevitable errors from estimated depth priors may lead to misaligned semantic information and 3D localization, resulting in feature smearing and suboptimal predictions. To mitigate this issue, we propose ADD, an Attention-based Depth knowledge Distillation framework with 3D-aware positional encoding. Unlike previous knowledge distillation frameworks that adopt stereo- or LiDAR-based teachers, we build our teacher with an architecture identical to the student's but with extra ground-truth depth as input. Thanks to our teacher design, our framework is seamless, domain-gap free, easily implementable, and compatible with object-wise ground-truth depth. Specifically, we leverage intermediate features and responses for knowledge distillation. Considering long-range 3D dependencies, we propose \emph{3D-aware self-attention} and \emph{target-aware cross-attention} modules for student adaptation. Extensive experiments verify the effectiveness of our framework on the challenging KITTI 3D object detection benchmark. We implement our framework on three representative monocular detectors, achieving state-of-the-art performance with no additional inference computational cost relative to the baseline models. Our code is available at https://github.com/rockywind/ADD.
Model distillation has been a popular method for producing interpretable machine learning. It uses an interpretable "student" model to mimic the predictions made by the black-box "teacher" model. However, when the student model is sensitive to the variability of the data sets used for training, the corresponding interpretation is not reliable. Existing strategies stabilize model distillation by checking whether a large enough corpus of pseudo-data is generated to reliably reproduce student models, but such methods have so far been developed only for specific student models. In this paper, we develop a generic approach to stable model distillation based on the central limit theorem for the average loss. We start with a collection of candidate student models and search for candidates that reasonably agree with the teacher. We then construct a multiple-testing framework to select a corpus size such that a consistent student model would be selected under different pseudo samples. We demonstrate our proposed approach on three commonly used intelligible models: decision trees, falling rule lists, and symbolic regression. Finally, we conduct simulation experiments on the Mammographic Mass and Breast Cancer datasets and illustrate the testing procedure through a theoretical analysis with a Markov process.
kNN-MT presents a new paradigm for domain adaptation by building an external datastore, which usually stores all target-language token occurrences in the parallel corpus. As a result, the constructed datastore is usually large and possibly redundant. In this paper, we investigate the interpretability issue of this approach: what knowledge does the NMT model actually need? We propose the notion of local correctness (LAC) as a new angle, which describes the potential translation correctness for a single entry and for a given neighborhood. An empirical study shows that our investigation successfully finds the conditions under which the NMT model could easily fail and needs related knowledge. Experiments on six diverse target domains and two language pairs show that pruning according to local correctness yields a lighter and more explainable memory for kNN-MT domain adaptation.
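The pruning idea can be caricatured in a few lines of NumPy. This is a hypothetical criterion for illustration only, not the paper's LAC definition: an entry is dropped when the base model is already correct on the entry and its whole k-nearest neighborhood, so the datastore adds no knowledge there.

```python
import numpy as np

def prune_datastore(keys, values, model_correct, k=4):
    """Sketch of correctness-based datastore pruning (hypothetical rule):
    keep an entry only if the base NMT model fails somewhere in the
    entry's k-nearest neighborhood of keys."""
    n = len(keys)
    # Pairwise squared distances between datastore keys
    d2 = ((keys[:, None, :] - keys[None, :, :]) ** 2).sum(-1)
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k + 1]   # k neighbours + the entry itself
        keep[i] = not model_correct[nbrs].all()
    return keys[keep], values[keep]
```

On a toy datastore with two well-separated key clusters, the cluster where the model is always correct is pruned away entirely, while a cluster containing even one model failure is retained in full.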
Online continual learning (OCL) aims to train neural networks incrementally from a non-stationary data stream with a single pass through the data. Rehearsal-based methods attempt to approximate the observed input distribution with a small memory and revisit it later to avoid forgetting. Despite their strong empirical performance, rehearsal methods still suffer from a poor approximation of the loss landscape of past data by the memory samples. This paper revisits the rehearsal dynamics in the online setting. We provide theoretical insights on the inherent memory overfitting risk from the viewpoint of biased and dynamic empirical risk minimization, and examine the merits and limits of repeated rehearsal. Inspired by our analysis, a simple and intuitive baseline, Repeated Augmented Rehearsal (RAR), is designed to address the underfitting-overfitting dilemma of online rehearsal. Surprisingly, across four rather different OCL benchmarks, this simple baseline outperforms vanilla rehearsal by 9%-17% and also significantly improves the state-of-the-art rehearsal-based methods MIR, ASER, and SCR. We also demonstrate that RAR successfully achieves an accurate approximation of the loss landscape of past data and high-loss ridge aversion in its learning trajectory. Extensive ablation studies are conducted to study the interplay between repeated and augmented rehearsal, and reinforcement learning (RL) is applied to dynamically adjust the hyperparameters of RAR to balance the stability-plasticity trade-off online.
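The core RAR idea, replaying the memory batch several times with a fresh augmentation each time rather than once, can be sketched on a toy linear model. Everything here (the jitter augmentation, squared loss, `lr`, `repeats`) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    """Toy augmentation: additive jitter (a stand-in for image augmentations)."""
    return x + rng.normal(0, 0.05, x.shape)

def rar_update(w, batch, memory, lr=0.1, repeats=3):
    """One online step of repeated augmented rehearsal (sketch): the memory
    batch is replayed `repeats` times, re-augmented on each replay, instead
    of the single replay done by vanilla rehearsal."""
    x, y = batch
    for _ in range(repeats):
        mx, my = memory
        mx = augment(mx)                     # fresh augmentation per replay
        xs = np.vstack([x, mx])              # joint batch: new + memory data
        ys = np.concatenate([y, my])
        grad = 2 * xs.T @ (xs @ w - ys) / len(ys)  # squared-loss gradient
        w = w - lr * grad
    return w
```

Repeated replay speeds up fitting of the joint objective, while the per-replay augmentation is what counteracts overfitting to the few stored memory samples.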
The infinitesimal jackknife is a general method for estimating the variance of parametric models, and more recently also of some ensemble methods. In this paper, we extend the infinitesimal jackknife to estimate the covariance between any two models. This can be used to quantify the uncertainty of model combinations, or to construct test statistics for comparing different models or combinations of models fitted using the same training dataset. Specific examples in this paper use boosted combinations of models such as random forests and M-estimators. We also study its application to neural networks and ensembles of XGBoost models. We illustrate the efficacy of the variance estimates through extensive simulations and an application to the Beijing Housing data, and demonstrate the theoretical consistency of the infinitesimal jackknife covariance estimate.
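For bagged ensembles, the infinitesimal-jackknife idea reduces to covariances between resample counts and member predictions; the two-model extension pairs those directional terms across the two ensembles. The sketch below assumes both ensembles were fitted on the same bootstrap resamples and is an illustration of the principle, not the paper's estimator (which also covers M-estimators and boosting).

```python
import numpy as np

def ij_covariance(N, preds1, preds2):
    """Infinitesimal-jackknife covariance between two ensembles fitted on
    the SAME B bootstrap resamples. N[b, i] counts how often observation i
    appears in resample b; preds1[b], preds2[b] are the two ensembles'
    member predictions at a fixed query point."""
    Nc = N - N.mean(axis=0)                             # centre resample counts
    c1 = Nc.T @ (preds1 - preds1.mean()) / len(preds1)  # Cov(N_i, t1) per obs i
    c2 = Nc.T @ (preds2 - preds2.mean()) / len(preds2)  # Cov(N_i, t2) per obs i
    return float(c1 @ c2)    # sum over observations of the paired covariances
```

Setting `preds2 = preds1` recovers the usual IJ variance estimate for a single ensemble, which is why the sanity checks below demand symmetry and non-negativity on the diagonal.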
Contextual bandits aim to identify, among a set of arms, the optimal one with the highest reward based on contextual information. Motivated by the fact that arms usually exhibit group behaviors and that mutual impacts exist among groups, we introduce a new model, Arm Group Graph (AGG), where the nodes represent arm groups and the weighted edges formulate the correlations among groups. To leverage this rich information, we propose a bandit algorithm, AGG-UCB, in which neural networks are designed to estimate rewards, and we propose to utilize graph neural networks (GNNs) to learn the representations of correlated arm groups. To solve the exploitation-exploration dilemma in bandits, we derive a new upper confidence bound (UCB) built on the neural network (exploitation) for exploration. Furthermore, we prove that AGG-UCB achieves a near-optimal regret bound with over-parameterized neural networks, and provide a convergence analysis of GNNs with fully connected layers, which may be of independent interest. Finally, we conduct extensive experiments against state-of-the-art baselines on multiple public datasets, showing the effectiveness of the proposed algorithm.
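The exploitation-plus-exploration-bonus principle behind AGG-UCB can be shown with a classic LinUCB-style sketch, where a ridge-regression reward model stands in for the paper's neural estimator and GNN group representations (an intentional simplification, not the proposed algorithm).

```python
import numpy as np

class LinUCB:
    """LinUCB-style sketch of the UCB principle used by AGG-UCB:
    a mean-reward estimate (exploitation) plus a confidence-width
    bonus (exploration) per arm context."""
    def __init__(self, dim, alpha=1.0, lam=1.0):
        self.A = lam * np.eye(dim)   # regularized design matrix
        self.b = np.zeros(dim)
        self.alpha = alpha           # exploration strength

    def select(self, contexts):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b       # ridge estimate of reward weights
        bonus = np.sqrt(np.einsum('ij,jk,ik->i', contexts, A_inv, contexts))
        return int(np.argmax(contexts @ theta + self.alpha * bonus))

    def update(self, x, reward):
        self.A += np.outer(x, x)     # shrink confidence along observed x
        self.b += reward * x
```

On a deterministic two-arm problem the bandit locks onto the rewarding arm almost immediately, since its estimated mean quickly dominates the other arm's untouched exploration bonus.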
There are good arguments to support the claim that feature representations eventually transition from general to specific in deep neural networks (DNNs), but this transition remains relatively under-explored. In this work, we take a small step towards understanding the transition of feature representations. We first characterize this transition by analyzing class separation in intermediate layers, and then model the process of class separation as community evolution in dynamic graphs. We then introduce modularity, a common metric in graph theory, to quantify the evolution of communities. We find that as layers go deeper, modularity tends to rise, but descends or reaches a plateau at particular layers. Through an asymptotic analysis, we show that modularity provides a quantitative analysis of the transition of feature representations. With this insight into feature representations, we show that modularity can also be used to identify and locate redundant layers in DNNs, which provides theoretical guidance for layer pruning. Based on this inspiring finding, we propose a layer-wise pruning method based on modularity. Further experiments show that our method can prune redundant layers with minimal impact on performance. The code is available at https://github.com/yaolu-zjut/dynamic-graphs-construction.
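The metric being tracked across layers is standard Newman modularity on a graph whose communities are the classes; computing it on one such graph is a few lines of NumPy (how the paper builds the per-layer graphs from features is not reproduced here).

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a weighted undirected graph: the fraction of
    edge weight inside communities minus the fraction expected under a
    random graph with the same degree sequence."""
    m2 = A.sum()                       # 2m: total weight, both directions
    k = A.sum(axis=1)                  # weighted node degrees
    same = labels[:, None] == labels[None, :]   # same-community indicator
    return float(((A - np.outer(k, k) / m2) * same).sum() / m2)
```

Two disconnected cliques labeled as two communities give the textbook value Q = 0.5, while a single all-in-one community gives Q = 0; in the paper's setting, rising Q across layers signals increasing class separation.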
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) An intermediate layer of the teacher network as the target performs better than the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification for the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
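Finding 1), distilling token relations rather than CLS tokens or raw features, can be sketched as a KL loss between the teacher's and student's attention-style relation matrices. This is a hedged NumPy illustration with made-up query/key inputs and temperature `tau`, not TinyMIM's exact loss.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_distill_loss(q_s, k_s, q_t, k_t, tau=1.0):
    """Relation-based distillation sketch: align the student's token-token
    relation matrix with the teacher's via per-token KL divergence, instead
    of matching CLS tokens or raw features."""
    r_s = softmax(q_s @ k_s.T / tau)   # student token relations, (N, N)
    r_t = softmax(q_t @ k_t.T / tau)   # teacher token relations, (N, N)
    eps = 1e-9                         # numerical floor inside the logs
    kl = (r_t * (np.log(r_t + eps) - np.log(r_s + eps))).sum(axis=-1)
    return float(kl.mean())            # average KL over tokens
```

The loss vanishes when the student reproduces the teacher's relations exactly and grows as the relation matrices diverge, which is the signal the student is trained to minimize.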